-- card: 24555 from stack: in.0
-- bmap block id: 0
-- flags: 0000
-- background id: 3797
-- name: 

-- part contents for background part 1
----- text -----
From: mitch@well.UUCP (Mitchell Waite)
Date: 2 Mar 88 05:33:04 GMT

In article <495@apple.Apple.Com> winkler@Apple.COM (Dan Winkler) said:

> As the author of the HyperTalk language and as an Apple employee, I am going to refrain from endorsing or criticizing any particular HyperCard book. However, I will say that there are some very poor ones out there that were thrown together in a matter of weeks and that are full of inaccuracies and examples that are not even syntactically correct (i.e. they were never tried). I feel there is a need for some informed criticism of these books in forums like this. I do not feel that the reviews I have seen so far have been informed. Folks, you shouldn't post a message to thousands of people around the world endorsing a book (or anything else) that you haven't thoroughly studied. I can't be the one to tell you that any particular book is dog meat, but I desperately want you to figure it out on your own and you haven't been doing too well so far.

I think it is pretty obvious why the newest books out on HyperTalk are so poor in quality, but it may not be as clear to others, so here goes my 8 bits. As a computer book author and programmer with over 70 books on the market, who started in 1974, I believe I am in a position to make some fairly accurate observations about the book publishing field.

First, this phenomenon of "quickly put together" computer books on new programming languages, operating systems, and software products has been going on for over 12 years. It started in 1974 when CP/M books came out and showed that there was a growing market for trade computer books (trade means in the bookstores). Up until that point the most popular books were on CMOS and TTL (Don Lancaster was the Danny Goodman of electronics at that time; audio books and Walter Jung were also hot). These first CP/M computer books were hastily written, often devoid of substance, inaccurate, and, in most cases, just plain worthless. But people bought them! And threw them away. And publishers came out of the woodwork. They put on colorful covers and titles that promised power and mastery overnight. Yet out of the over 100 CP/M books published, only three or four were any good. What happened was that over the years the good books continued to sell, while the others ended up collecting dust on the bookstore shelves and got sent back to the publishers as "returns" (returns are like payments you didn't know you had to pay; publishers hate them). The good books eventually sold well, and some of those CP/M books are in fact still selling today!

This phenomenon was repeated with every major product in the microcomputer marketplace: Apple //, Visicalc, BASIC, MS-DOS, IBM-PC, WordStar, 1-2-3, Pascal, C. Each of these had tons of early junk books, followed later by more solid tutorial, application, and reference titles.

The forces at work that produce these early books are interesting. In general the software creator (be it Digital Research, Lotus, whoever) does a crummy job of documentation. It is done last and fast. This leaves a giant hole for improvements in the documentation, and in some cases the manufacturer never fills the hole (CP/M and MS-DOS manuals are an example; the best-selling computer book in the world is Using 1-2-3, and it's not from Lotus. We won't even touch on UNIX books).
So the computer book publishers see a great opportunity with these poorly documented products. This would not be such a bad situation, except that many of the publishers are still driven by the fact that in the beginning people will buy anything, beginners will not know if it's a good or a bad book, and these publishers will automatically gain market share if they move quickly. And quickly they move. In fact the quickest write the book BEFORE the product is out.

The Goodman book is a classic case of this out-the-door-quick-for-market-share phenomenon for HyperCard (published by Bantam, which is a late starter in computer books). The Shafer book (published by Sams) is a classic case of out-the-door-quick-for-market-share for HyperTalk.

In the first case, Apple knew it was behind on Wildcard and needed to get a product out the door fast. This explains the proliferation of quirky bugs still in the product (and I love HyperCard still, so don't take this wrong). They worked hard on their own manual, but alas not everyone is satisfied with it. Plus at least 50 to 75% of HyperCard users get HyperCard some way other than by purchasing it, and never see a manual. Apple's decision to bundle HyperCard added more fuel to this last possibility. They made a deal early with Bantam to bring in Goodman and let him get the first book out. This was a great deal for Bantam, for Apple, and for Goodman, and really it wasn't so bad for the book buyer, because at least the book wasn't written in a vacuum and had an opportunity to be looked over by the tough eye of Apple. And as a HyperCard training book, Goodman's title is not that bad. But as a HyperTalk primer, it is the pits. And that is understandable, since Goodman is not a programmer and never claims to be one. Overall I think more people win than lose here, and I don't have a lot of gripes about the Goodman book and Apple's approach. I only wish they had picked me to write it.

The first book out on HyperTalk was from Walking Shadow Press: "Programming with HyperTalk". The Walking Shadow book is distributed by a couple of guys at Apple and to this day has no real distribution in bookstores, so it is not really a valid entry (yet). Content-wise it has some interesting scripts, and the first three chapters are pretty good, but it falls apart after that.

The Shafer book (HyperTalk Programming/Hayden/Sams) was the first HyperTalk programming title from a major publisher (Sams is owned by Macmillan, and Sams owns Hayden). This book was written very fast (under 4 months), was desktop published by the author to avoid the slow typesetting production cycle, and has sold over 30,000 copies in just three months.

In my opinion the Shafer book is lacking in many ways. First, its organization is all over the map; it covers too many things, from stack products to authoring. The sequence for teaching the subjects of HyperTalk is non-linear, and things are lumped together in a confusing fashion, like lumping all the dialog boxes into one chapter when they are really related to many different topics. I also find that it dumps a huge amount of detailed authoring information at the very beginning, information that is easily forgotten and doesn't belong in the book. I find the examples to be lacking; they don't represent typical problems I run across when I program. There are NO flowcharts anywhere, not even in the section on control structures. There are only two project scripts: one is an education quizzer, the other a script-writing prototyper, and neither is very useful in my opinion.
There are few tricks, little about me, and so on. The best part to me is the 23-page Appendix A, which gives the entire HyperTalk vocabulary, its syntax, and notes about each command, function, and keyword. It's also expensive ($24.95 for a book on a product that costs $49.00). On the other hand, if I had to teach HyperTalk today there is little else to choose from, so I would probably turn to this book, but create my own path for the student to follow rather than follow its table of contents. The way things will turn out is that the Shafer book will sell until a better book comes out.

My company is working on such a HyperTalk programming book, a combination tutorial and reference. Our formula is not to be the first out, but to be the best out. The book has three top-notch authors all working together, meeting every week to review each other's progress. We spent two months just designing the outline. Every example is written and tested before the text is written. Every chapter builds on previous knowledge, and no ad-hoc assumptions are made. We have three separate technical reviewers at work. There is a stack being written that will give a HyperCard version of the information (much like the way the Texas database program works). There will be a section containing the 100 most asked programming questions about HyperTalk, gathered from long hours studying the patterns of questions people ask on the nets (BIX, Usenet, and CompuServe). It will be reviewed by Dan Winkler. The first two thirds of the book is tutorial. The last third is a reference for each command, with three examples in a standard format (each example more complex than the one before it). Caveats, bugs, tips, and warnings are given for each command. There is a quicker short-form reference too, and jump tables on the inside front and back covers contain all the keywords with pointers to the reference and tutorial pages. The book is called HyperTalk Bible and will be available in the fall of this year. I will announce the book here and on the other nets when it is out.

There will be other good books out on HyperTalk. Publishers like Microsoft Press and Que tend to do a very good job on their titles. Bantam is hard at work on two more script titles from Goodman, and I know for a fact that every major computer book publisher has at least two HyperTalk books in the works. By the first quarter of '89 I would expect about 20 books to be on the market.

Hopefully this will clear up some of the thoughts people may be having about the early HyperTalk books. In the end I think that Dan Winkler is correct when he asks people to be careful when they say a book is good. What are your qualifications for saying a book is good? Good compared to what? A detailed analysis of a book is even more difficult than analysis of a software program. There is no way to test whether a book "crashes", unless you sit and type in the scripts, and how many reviewers do that?

-- part contents for background part 45
----- text -----
HyperTalk Books, How to Judge?